49 research outputs found

    Economic Factors of Vulnerability Trade and Exploitation

    Cybercrime markets support the development and diffusion of new attack technologies, vulnerability exploits, and malware. Whereas the revenue streams of cyber attackers have been studied multiple times in the literature, no quantitative account currently exists of the economics of attack acquisition and deployment. Yet this understanding is critical to characterize the production of (traded) exploits, the economy that drives it, and its effects on the overall attack scenario. In this paper we provide an empirical investigation of the economics of vulnerability exploitation and the effects of market factors on the likelihood of exploitation. Our data is collected first-hand from a prominent Russian cybercrime market where the most active attack tools reported by the security industry are traded. Our findings reveal that exploits in the underground are priced similarly to or above vulnerabilities in legitimate bug-hunting programs, and that the refresh cycle of exploits is slower than often assumed. On the other hand, cybercriminals are becoming faster at introducing selected vulnerabilities, and the market is clearly expanding in terms of players, traded exploits, and exploit pricing. We then evaluate the effects of these market variables on the likelihood of attack realization, and find strong evidence of a correlation between market activity and exploit deployment. We discuss implications for vulnerability metrics, economics, and exploit measurement. Comment: 17 pages, 11 figures, 14 tables
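
    A minimal sketch of the kind of analysis the abstract describes: relating market variables to the likelihood that a traded exploit is later deployed, here with a simple logistic regression. The feature names, the synthetic records, and the model choice are illustrative assumptions, not the paper's actual dataset or specification.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [listing price (USD), number of sellers, days since first listing].
    # Values are invented for illustration.
    market_features = np.array([
        [1500, 3, 10],
        [300,  1, 120],
        [5000, 6, 5],
        [800,  2, 60],
    ])
    # 1 = exploit later observed in attacks in the wild, 0 = not observed
    exploited = np.array([1, 0, 1, 0])

    model = LogisticRegression(max_iter=1000).fit(market_features, exploited)
    print("coefficients (price, sellers, listing age):", model.coef_[0])
    print("P(exploited) for a $2000, 4-seller, 20-day-old listing:",
          model.predict_proba([[2000, 4, 20]])[0, 1])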

    Towards Realistic Threat Modeling: Attack Commodification, Irrelevant Vulnerabilities, and Unrealistic Assumptions

    Current threat models typically consider all possible ways an attacker can penetrate a system and assign probabilities to each path according to some metric (e.g. time-to-compromise). In this paper we discuss how this view hinders the realism of both technical (e.g. attack graphs) and strategic (e.g. game theory) approaches to current threat modeling, and propose to steer away from it by looking more carefully at attack characteristics and the attacker's environment. We use a toy threat model for ICS attacks to show how a realistic view of attack instances can emerge from a simple analysis of attack phases and attacker limitations. Comment: Proceedings of the 2017 Workshop on Automated Decision Making for Active Cyber Defense
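
    A toy illustration of the modeling view the paper pushes back on: enumerate every attack path in a small graph and score each with a time-to-compromise metric. The ICS-flavoured graph, node names, and day estimates below are hypothetical.

    # edges: attacker step -> [(next step, estimated days to compromise)]
    attack_graph = {
        "internet":    [("workstation", 2), ("vpn_gateway", 7)],
        "workstation": [("historian", 5)],
        "vpn_gateway": [("historian", 1)],
        "historian":   [("plc", 10)],
        "plc":         [],
    }

    def enumerate_paths(graph, node, target, days=0, path=None):
        """Yield (path, total days) for every route from node to target."""
        path = (path or []) + [node]
        if node == target:
            yield path, days
            return
        for nxt, cost in graph[node]:
            yield from enumerate_paths(graph, nxt, target, days + cost, path)

    for route, total in enumerate_paths(attack_graph, "internet", "plc"):
        print(" -> ".join(route), f"({total} days)")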

    A preliminary analysis of vulnerability scores for attacks in wild

    NVD and Exploit-DB are the de facto standard databases used for research on vulnerabilities, and the CVSS score is the standard measure of risk. One open question is whether such databases and scores are actually representative of attacks found in the wild. To address this question we have constructed a database (EKITS) based on the vulnerabilities currently used in exploit kits from the black market, and extracted another database of vulnerabilities from Symantec's Threat Database (SYM). Our final conclusion is that the NVD and EDB databases are not a reliable source of information for exploits in the wild, even after controlling for the CVSS and exploitability subscores. A high or medium CVSS score shows significant sensitivity (i.e. prediction of attacks in the wild) only for vulnerabilities present in exploit kits (EKITS) traded in the black market. All datasets exhibit low specificity.
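
    The evaluation described above can be sketched as follows: treat "CVSS at or above a threshold" as a predictor of exploitation in the wild (the SYM ground truth) and measure its sensitivity and specificity. The records below are invented for illustration; the actual datasets are NVD, EDB, EKITS, and SYM as described in the abstract.

    records = [
        # (cvss_score, exploited_in_wild)
        (9.3, True), (10.0, True), (7.5, False), (9.0, False),
        (6.8, True), (5.0, False), (4.3, False), (9.3, False),
    ]

    def sensitivity_specificity(records, threshold=7.0):
        """Sensitivity and specificity of the rule 'score >= threshold => exploited'."""
        tp = sum(1 for s, e in records if s >= threshold and e)
        fn = sum(1 for s, e in records if s < threshold and e)
        tn = sum(1 for s, e in records if s < threshold and not e)
        fp = sum(1 for s, e in records if s >= threshold and not e)
        return tp / (tp + fn), tn / (tn + fp)

    sens, spec = sensitivity_specificity(records)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f}")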

    My Software has a Vulnerability, should I worry?

    (U.S.) Rule-based policies to mitigate software risk suggest using the CVSS score to measure individual vulnerability risk and act accordingly: a HIGH CVSS score according to the NVD (the U.S. National Vulnerability Database) is therefore translated into a "Yes". A key issue is whether such a rule is economically sensible, in particular whether reported vulnerabilities have actually been exploited in the wild and whether the risk score actually matches the risk of exploitation. We compare the NVD dataset with two additional datasets: EDB for the white market of vulnerabilities (such as those present in Metasploit), and EKITS for the exploits traded in the black market. We benchmark them against Symantec's Threat Explorer dataset (SYM) of actual exploits in the wild. We analyze the whole spectrum of CVSS submetrics and use these characteristics to perform a case-controlled analysis of CVSS scores (similar to those used to link lung cancer and smoking) to test their reliability as a risk factor for actual exploitation. We conclude that (a) fixing just because of a high CVSS score in NVD yields only negligible risk reduction, (b) the additional existence of proof-of-concept exploits (e.g. in EDB) may yield some additional but not large risk reduction, and (c) fixing in response to presence in the black markets yields a risk reduction equivalent to wearing a seat belt in a car (you might still die, but still...). On the negative side, our study shows that, as an industry, we lack a metric with high specificity (ruling out vulnerabilities we should not worry about). In order to address the feedback from BlackHat 2013's audience, the final revision (V3) provides additional data in Appendix A detailing how the control variables in the study affect the results. Comment: 12 pages, 4 figures
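
    The case-controlled analysis mentioned above amounts to comparing the odds of exploitation between vulnerabilities with and without a given marker (e.g. presence in EKITS). A minimal sketch with an invented 2x2 table, not the paper's actual counts:

    def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
        """(a/b) / (c/d) for a standard 2x2 case-control table."""
        return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

    # cases = exploited in the wild (SYM); exposure = vulnerability traded in EKITS
    print("odds ratio:", odds_ratio(exposed_cases=40, unexposed_cases=60,
                                    exposed_controls=5, unexposed_controls=95))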

    MalwareLab: Experimentation with Cybercrime Attack Tools

    Cybercrime attack tools (i.e. exploit kits) are reportedly responsible for the majority of attacks affecting home users. Exploit kits are traded in the black markets at different prices, advertising different capabilities and functionalities. In this paper we present our experimental approach to testing 10 exploit kits leaked from the markets, which we deployed in an isolated environment, our MalwareLab. The purpose of this experiment is to test the resilience of these tools against software configurations that change over time. We present our experiment design and implementation, discuss challenges, lessons learned, and open problems, and present a preliminary analysis of the results.
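
    A sketch of the experiment loop described above: drive each leaked kit against victim configurations that drift over time and record whether infection succeeds. The kit names, configurations, and the try_infect() stand-in are hypothetical, not the MalwareLab implementation.

    import itertools

    EXPLOIT_KITS = ["kit_a", "kit_b"]                      # hypothetical leaked kits
    CONFIGURATIONS = [                                     # victim setups by year
        {"year": 2010, "browser": "ie8",  "flash": "10.0"},
        {"year": 2011, "browser": "ie9",  "flash": "10.3"},
        {"year": 2012, "browser": "ff14", "flash": "11.3"},
    ]

    def try_infect(kit, config):
        """Stand-in for driving a victim VM against the kit's landing page."""
        # Toy rule: older configurations are easier targets for every kit.
        return config["year"] <= 2011

    results = [(kit, cfg["year"], try_infect(kit, cfg))
               for kit, cfg in itertools.product(EXPLOIT_KITS, CONFIGURATIONS)]
    for kit, year, infected in results:
        print(f"{kit} vs {year}: {'infected' if infected else 'resisted'}")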

    The Effect of Security Education and Expertise on Security Assessments: the Case of Software Vulnerabilities

    In spite of the growing importance of software security and the industry demand for more cyber security expertise in the workforce, the effect of security education and experience on the ability to assess complex software security problems has only recently been investigated. As a proxy for the full range of software security skills, we considered the problem of assessing the severity of software vulnerabilities by means of a structured analysis methodology widely used in industry (the Common Vulnerability Scoring System (CVSS) v3), and designed a study to compare how accurately individuals with a background in information technology but different professional experience and education in cyber security are able to assess the severity of software vulnerabilities. Our results provide some structural insights into the complex relationship between the education or experience of assessors and the quality of their assessments. In particular, we find that individual characteristics matter more than professional experience or formal education; apparently it is the combination of skills one possesses (including actual knowledge of the system under study), rather than specialization or years of experience, that most influences assessment quality. Similarly, we find that the overall advantage conferred by professional expertise significantly depends on the composition of the individual's security skills as well as on the available information. Comment: Presented at the Workshop on the Economics of Information Security (WEIS 2018), Innsbruck, Austria, June 2018
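
    One simple way to operationalize assessment accuracy in a study like this is to compare each participant's CVSS v3 base-metric choices against a reference vector and count agreements. A sketch under that assumption; the vectors below are invented and the paper's actual scoring may differ.

    # Reference CVSS v3 base metrics for one vulnerability (e.g. the NVD vector).
    REFERENCE = {"AV": "N", "AC": "L", "PR": "N", "UI": "N", "C": "H", "I": "H", "A": "H"}

    def agreement(assessment, reference=REFERENCE):
        """Fraction of CVSS v3 base metrics the assessor chose correctly."""
        return sum(assessment[m] == v for m, v in reference.items()) / len(reference)

    student      = {"AV": "N", "AC": "H", "PR": "N", "UI": "R", "C": "H", "I": "H", "A": "L"}
    professional = {"AV": "N", "AC": "L", "PR": "N", "UI": "N", "C": "H", "I": "L", "A": "H"}
    print("student agreement:     ", agreement(student))
    print("professional agreement:", agreement(professional))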

    You Can Tell a Cybercriminal by the Company they Keep: A Framework to Infer the Relevance of Underground Communities to the Threat Landscape

    The criminal underground is populated with forum marketplaces where, allegedly, cybercriminals share and trade knowledge, skills, and cybercrime products. However, it is still unclear whether all marketplaces matter the same in the overall threat landscape. To effectively support trade and avoid degenerating into scam-for-scammers venues, underground markets must address fundamental economic problems (such as moral hazard and adverse selection) that enable the exchange of actual technology and cybercrime products (as opposed to repackaged malware or years-old password databases). From the relevant literature and a manual investigation, we identify several mechanisms that marketplaces implement to mitigate these problems, and we condense them into a market evaluation framework based on the Business Model Canvas. We use this framework to evaluate which mechanisms 'successful' marketplaces have in place, and whether these differ from those employed by 'unsuccessful' marketplaces. We test the framework on 23 underground forum markets by searching 836 aliases of indicted cybercriminals to identify 'successful' marketplaces. We find evidence that marketplaces whose administrators are impartial in trade, verify their sellers, and have the right economic incentives to keep the market functional are more likely to be credible sources of threats. Comment: The 22nd Workshop on the Economics of Information Security (WEIS'23), July 5-8, 2023, Geneva, Switzerland
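
    The framework application can be pictured as a per-marketplace checklist score combined with the indicted-alias search. A sketch with invented mechanism names and forum data, not the paper's Business Model Canvas framework or dataset:

    MECHANISMS = ["impartial_admins", "seller_verification", "escrow", "reputation_system"]

    forums = {
        "forum_alpha": {"mechanisms": {"impartial_admins", "escrow", "reputation_system"},
                        "indicted_aliases_found": 7},
        "forum_beta":  {"mechanisms": {"reputation_system"},
                        "indicted_aliases_found": 0},
    }

    for name, data in forums.items():
        score = sum(m in data["mechanisms"] for m in MECHANISMS)
        print(f"{name}: {score}/{len(MECHANISMS)} mechanisms, "
              f"{data['indicted_aliases_found']} indicted aliases")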

    A Bug Bounty Perspective on the Disclosure of Web Vulnerabilities

    Bug bounties have become increasingly popular in recent years. This paper discusses bug bounties by framing them theoretically against the so-called platform economy. Empirically, the interest is in the disclosure of web vulnerabilities through the Open Bug Bounty (OBB) platform between 2015 and late 2017. According to the empirical results, based on a dataset covering nearly 160 thousand web vulnerabilities: (i) OBB has been successful as a community-based platform for the dissemination of web vulnerabilities, and the platform has attracted many productive hackers; (ii) there exists a large productivity gap, which likely relates to (iii) a knowledge gap and the use of automated tools for web vulnerability discovery; (iv) the platform has been exceptionally fast at evaluating new vulnerability submissions; but (v) the patching times of the disseminated web vulnerabilities have been long. With these empirical results and the accompanying theoretical discussion, the paper contributes to the small but rapidly growing body of research on bug bounties. In addition, the paper makes a practical contribution by discussing the business models behind bug bounties from the viewpoints of platforms, ecosystems, and vulnerability markets. Comment: 17th Annual Workshop on the Economics of Information Security, Innsbruck, https://weis2018.econinfosec.org
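
    Two of the reported measurements, the productivity gap and the patching times, can be computed from submission records along these lines. The records and field layout below are assumptions for illustration; OBB's actual data export differs.

    from statistics import median
    from collections import Counter

    submissions = [  # (reporter, days until the site patched, or None if unpatched)
        ("hacker_a", 12), ("hacker_a", 30), ("hacker_a", None), ("hacker_a", 45),
        ("hacker_b", 200), ("hacker_c", 90), ("hacker_d", None),
    ]

    counts = Counter(reporter for reporter, _ in submissions)
    top_share = counts.most_common(1)[0][1] / len(submissions)
    patch_delays = [d for _, d in submissions if d is not None]

    print(f"top reporter's share of submissions: {top_share:.0%}")
    print(f"median patch delay: {median(patch_delays)} days")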

    The Work-Averse Cyber Attacker Model: Theory and Evidence From Two Million Attack Signatures

    The assumption that a cyber attacker will potentially exploit all present vulnerabilities drives most modern cyber risk management practices and the corresponding security investments. We propose a new attacker model, based on dynamic optimization, in which we demonstrate that large initial fixed costs of exploit development induce attackers to delay the implementation and deployment of exploits of vulnerabilities. The theoretical model predicts that mass attackers will preferably i) exploit only one vulnerability per software version, ii) largely include only vulnerabilities requiring low attack complexity, and iii) be slow at trying to weaponize new vulnerabilities. These predictions are empirically validated on a large dataset of observed mass attacks launched against a large collection of information systems. The findings in this paper allow cyber risk managers to better focus their vulnerability management efforts, and set a new theoretical and empirical basis for further research defining attacker (offensive) processes.
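
    A stylized sketch of the work-averse trade-off: the large fixed cost of weaponizing a new vulnerability is only worth paying once the expected gain over the currently deployed exploit exceeds that cost, which is why deployment is delayed. The numbers and the simple net-gain rule are illustrative, not the paper's dynamic optimization model.

    def develop_new_exploit(fixed_cost, new_revenue_per_period,
                            current_revenue_per_period, horizon):
        """Return True if paying the fixed development cost pays off over the horizon."""
        gain = (new_revenue_per_period - current_revenue_per_period) * horizon
        return gain > fixed_cost

    # Early on, the deployed exploit still earns well: the attacker waits.
    print(develop_new_exploit(fixed_cost=40_000, new_revenue_per_period=4_000,
                              current_revenue_per_period=3_500, horizon=12))
    # Once the old exploit's returns have decayed to nothing, developing pays off.
    print(develop_new_exploit(fixed_cost=40_000, new_revenue_per_period=4_000,
                              current_revenue_per_period=0, horizon=12))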